Master artificial intelligence prompting and usage
You're Not Behind (Yet): How to Learn AI in 17 Minutes
This expert guide presents a seven-step roadmap for beginners to master AI communication quickly by understanding that generative systems predict the most probable next word rather than recalling stored facts. The core instruction is to abandon vague inputs and utilize specific frameworks, starting with the AIM framework (Actor, Input, Mission) to structure prompts and the MAP framework to provide essential context for better reasoning. To achieve expert-level results, users must debug their thinking through iteration and actively steer the model toward expert knowledge, thereby preventing superficial or generic responses. Finally, the guide insists that users must verify claims using tools like cross-model verification to separate factual knowledge from confident errors, and use the OCEAN framework to refine and personalize outputs.
The AIM, MAP, and OCEAN frameworks elevate basic prompting—which often results in vague guesses and generic outputs—into expert-level interaction by introducing structure, context, and refinement (or "taste").
By applying the AIM and MAP frameworks, you can join the top 10% of AI users; applying the OCEAN framework takes you to the absolute expert level.
The AIM framework transforms vague requests into targeted, computable instructions, giving the AI something it can understand and compute rather than merely guess at.
AIM stands for:
Actor: the persona or role the model should adopt.
Input: the context and data that ground its response.
Mission: the specific task and deliverable you want.
Using this three-part structure helps the model understand, compute, and reason with the prompt, often resulting in outputs that are at least five or ten times better than before.
The MAP framework provides the essential context that grounds the AI model, which otherwise resides in a massive mathematical space filled with billions of numbers. Context serves as "the map" that guides the AI on where to look and what matters.
MAP stands for:
Memory: continuity from conversation history or carried-over notes.
Assets: files, data, or resources attached to the prompt.
Actions: tools the model can call to do work beyond generating text.
Prompt: the instruction itself.
The better the memory, assets, and external actions provided, the richer the context given to the AI, which results in better AI reasoning and response.
Once basic structure (AIM) and context (MAP) are established, the OCEAN framework is used to develop "tastes" and turn generic answers into insights that sound personalized and original. It encourages you to treat the AI like a sparring partner, pushing back and sharpening both your and the AI's thinking.
OCEAN stands for:
Original: demand nonobvious ideas and multiple perspectives.
Concrete: require specific names, examples, and numbers.
Evident: make the reasoning and supporting evidence visible.
Assertive: commit to a clear, defensible stance.
Narrative: shape the output into a tight, flowing story.
By using OCEAN, basic, "junk food" AI outputs that sound like everyone else's are transformed into thoughtful, unique, and critically refined insights.
These frameworks collectively move the interaction away from relying on the AI's probabilistic "guessing machine"—which produces vague guesses for vague prompts—to a structured, contextualized, and iterative conversation where you and the AI are learning together. This transition moves you from passive consumer of AI output to active co-creator.
The 7-Step Roadmap: Master AI in 30 Days is designed to help beginners master AI like the top 1%. The initial steps, Step 1 and Step 2, occur during Week 1 and focus on Learning the Basics by mastering communication with the AI and selecting a foundational tool.
The first week of the roadmap starts with learning "machine English". This skill is essential because most people interact with generative AI systems like ChatGPT or Gemini incorrectly, treating them like a person, which is described as a "huge mistake".
Generative AI systems, such as ChatGPT or Gemini, "don't actually understand our language; they predict it". This prediction process is what sets these models apart from traditional search engines and dictates how you must communicate with them.
The example of "Humpty Dumpty sat on a" illustrates this: just as your brain predicts "wall," the AI weighs all candidate next tokens and selects the most probable one to finish the line. This probabilistic nature is why AI "can feel so smart but also so alien".
Key Insight: Recognizing that AI is a "guessing machine" fundamentally changes how you must interact with it. Because the AI is a guessing machine, the quality of the output directly correlates with the quality of the input.
To understand how AI generates language, you need to understand tokenization and embedding space, which are the mechanical foundations of how generative models process and predict text.
Tokenization: The AI breaks down text into smaller parts called tokens. A token might be a word or part of a word. For example, "Humpty Dumpty" is split into the tokens "Humpty" and "Dumpty".
Vectors: Each token is then converted into a multi-dimensional list of numbers known as vectors. These numbers represent the token in a way the AI can mathematically compute.
Embedding Space: These vectors are placed within a massive mathematical space called the embedding space. This space is where the AI "lives" and operates. In this space:
The embedding space is built from billions of parameters that have been trained on massive amounts of text data. The AI uses this space to compute which tokens are most likely to come next in a sequence, given the surrounding context.
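The mechanics above can be sketched with a toy example. The three-dimensional vectors below are illustrative inventions (real models use thousands of dimensions and vocabularies of tens of thousands of tokens), but the proximity idea is the same: related tokens have a higher cosine similarity.

```python
import math

# Toy "embedding space": each token maps to a small vector.
# These numbers are made up for illustration only.
embeddings = {
    "wall":  [0.9, 0.1, 0.2],
    "fall":  [0.8, 0.2, 0.3],
    "party": [0.1, 0.9, 0.4],
}

def cosine_similarity(a, b):
    """Measure how close two token vectors sit in the embedding space."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# "wall" and "fall" live near each other; "party" sits farther away.
print(cosine_similarity(embeddings["wall"], embeddings["fall"]))
print(cosine_similarity(embeddings["wall"], embeddings["party"]))
```

In a real model, this nearness is what lets the prompt's context pull the prediction toward the relevant neighborhood of the space.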
Once the AI has placed tokens into the embedding space, it generates responses using probability and proximity—the core mechanism of how it predicts what comes next.
Proximity in Embedding Space: The AI examines which tokens are nearby in the embedding space based on the context provided in the prompt. Because similar concepts live closer together, the AI can quickly locate the most relevant candidates.
Probability Calculation: The AI then calculates the probability of each candidate token being the correct next word. The model has been trained to assign higher probabilities to sequences that commonly appear together in the training data.
Generated on the Fly: The answer is generated token by token, on the fly. The AI does not retrieve pre-stored answers or facts; rather, it builds the answer dynamically based on the probabilities and proximity in the embedding space.
For example, given "Humpty Dumpty had a great," the AI weighs all possible next tokens (e.g., "fall," "day," "party") and selects the one with the highest probability—in this case, "fall"—because that phrase commonly appears in the training data.
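The selection step can be sketched as a softmax over candidate scores. The raw scores below are hypothetical, not real model outputs, but the conversion from scores to probabilities and the greedy pick of the top token mirror what the passage describes:

```python
import math

# Hypothetical raw scores (logits) for candidate next tokens after
# "Humpty Dumpty had a great" — higher means more common in training data.
logits = {"fall": 4.0, "day": 2.0, "party": 1.0}

def softmax(scores):
    """Convert raw scores into probabilities that sum to 1."""
    exps = {tok: math.exp(s) for tok, s in scores.items()}
    total = sum(exps.values())
    return {tok: e / total for tok, e in exps.items()}

probs = softmax(logits)
next_token = max(probs, key=probs.get)  # greedy decoding: pick the most probable token
print(next_token)  # "fall"
```

Real models sample from this distribution rather than always taking the maximum, which is why the same prompt can yield different answers.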
To create sharper, more structured prompts, the AIM framework is introduced. This framework transforms a simple request into a structure the model can understand, compute, and reason with, often making results at least five or ten times better than before.
The first component of the AIM Framework is Actor, which involves telling the model what persona or role it should adopt. This assignment of a specific role or expertise area is crucial for grounding the AI's response in a relevant domain of knowledge.
Purpose: By defining an actor, you direct the AI to access the specific "area" of its embedding space that contains the knowledge, tone, and style associated with that persona. Instead of a generic or unfocused response, the AI tailors its output to match the expertise and perspective of the assigned role.
Example: Instead of a vague request like "fix my resume," you might instruct the AI: "You are the world's most sought-after résumé editor and business writer. You've reviewed thousands of résumés that led to interviews at top tech companies."
This instruction situates the AI within the expertise of a professional résumé editor, which helps it compute a more targeted, useful response.
The second component of the AIM Framework is Input, which requires you to give the model the necessary context and data to ground its response in reality.
Purpose: Providing input ensures that the AI has the specific information it needs to generate a relevant and useful answer. Without sufficient input, the AI is forced to guess broadly, often resulting in generic or off-target responses.
Example: Following the actor assignment, you might specify the context: "I'm attaching my resume and the job description for a senior product manager role at a fintech company."
This grounding information allows the AI to focus its recommendations on the specific role and industry, making the output far more applicable to your actual situation.
The third component of the AIM Framework is Mission, which defines precisely what you want the model to do. This clarifies the specific goal of the interaction.
Purpose: The mission component ensures the AI understands the desired output format and the specific task it needs to accomplish. A clear mission prevents the AI from wandering or producing irrelevant content.
Example: Continuing the résumé improvement example, you might define the mission: "Review it and give me a bullet list of 10 specific ideas on how to improve clarity, measurable impact, and alignment with the role. The goal is to help me build the best resume that gets me hired."
This explicit mission statement ensures the AI knows exactly what deliverable you expect and what criteria to optimize for.
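As a minimal sketch, the three AIM parts can be assembled programmatically. The helper below is a hypothetical convenience for keeping the Actor, Input, and Mission sections distinct, using the résumé example from this section:

```python
def build_aim_prompt(actor: str, input_context: str, mission: str) -> str:
    """Combine the Actor, Input, and Mission parts into one structured prompt."""
    return "\n\n".join([actor, input_context, mission])

prompt = build_aim_prompt(
    actor=("You are the world's most sought-after résumé editor and business "
           "writer. You've reviewed thousands of résumés that led to interviews "
           "at top tech companies."),
    input_context=("I'm attaching my resume and the job description for a "
                   "senior product manager role at a fintech company."),
    mission=("Review it and give me a bullet list of 10 specific ideas on how to "
             "improve clarity, measurable impact, and alignment with the role."),
)
print(prompt)
```

Keeping the three parts as separate arguments makes it easy to swap the Actor or Mission while reusing the same grounding Input.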
Once you understand how to communicate using machine English, the next part of Week 1 is to "pick your instrument".
While the specific choice "doesn't matter," suggested foundational models include:
By deeply exploring one model, you train your brain to see structures and patterns. Each model has:
Week 1 Goal: By the end of Week 1, you should be comfortable writing a structured prompt using the AIM framework without conscious effort.
To ensure smart outputs, you must feed the AI context, which acts as a map to navigate the complex mathematical space within the model. The framework for context is MAP.
Context is vital because without it, even the world's smartest AI will sound clueless. Using both the AIM and MAP frameworks puts you in the top 10% of AI users.
Memory refers to the continuity provided by the conversation history or notes that carry over from previous chat sessions with the AI. This component ensures that the AI has access to the ongoing narrative of your interaction.
Purpose: Memory helps the AI understand the broader context of your request by referencing what has been discussed previously. Without memory, each prompt is treated as an isolated event, forcing you to repeat information.
Implementation: To build continuity, you can:
Assets are the files, data, or resources that you attach or copy-paste into your prompt. These assets help to ground the model in reality by providing concrete, specific information.
Purpose: Assets ensure that the AI is working with actual data rather than making broad generalizations. By providing documents, spreadsheets, code snippets, or other resources, you give the AI a tangible foundation for its reasoning.
Examples:
Actions are the tools that the model can call to do work beyond generating text. These capabilities expand the AI's utility from a text generator to an active assistant.
Purpose: Actions allow the AI to gather information, execute tasks, and interact with external systems, making it far more powerful and useful for complex workflows.
Examples of Actions:
The Prompt is the instruction itself—the actual question or command you're giving to the AI. While this might seem straightforward, the quality of the prompt is dramatically improved when it's supported by Memory, Assets, and Actions.
Purpose: The prompt is the culmination of all the context you've provided. With rich memory, relevant assets, and available actions, even a simple prompt can yield sophisticated results.
Key Insight: The better you incorporate memory, assets, and external actions, the richer the context you give the AI, which leads to better AI reasoning and response.
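The four MAP components can be bundled before a request is sent. The `MapContext` class below is an illustrative sketch, not part of any real SDK; it simply shows how memory, assets, and available actions wrap around the prompt itself:

```python
from dataclasses import dataclass, field

@dataclass
class MapContext:
    """Bundle the four MAP components before sending a request to a model."""
    memory: list = field(default_factory=list)   # notes carried over from earlier sessions
    assets: list = field(default_factory=list)   # files or data pasted into the prompt
    actions: list = field(default_factory=list)  # tools the model may call
    prompt: str = ""                             # the instruction itself

    def render(self) -> str:
        """Flatten the full context into a single prompt string."""
        sections = []
        if self.memory:
            sections.append("Memory:\n" + "\n".join(self.memory))
        if self.assets:
            sections.append("Assets:\n" + "\n".join(self.assets))
        if self.actions:
            sections.append("Available actions:\n" + "\n".join(self.actions))
        sections.append(self.prompt)
        return "\n\n".join(sections)

ctx = MapContext(
    memory=["We are iterating on my fintech PM résumé."],
    assets=["<pasted résumé text>", "<pasted job description>"],
    actions=["web_search"],
    prompt="Suggest three improvements to the summary section.",
)
print(ctx.render())
```

Note how even the short final prompt inherits all the grounding that precedes it, which is exactly the point of MAP.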
Achieving expert level requires debugging your thinking. When you don't get the desired answer, the problem is usually your thinking, not the AI. Prompting requires iterating, not just typing.
The fundamental principle of Step 4 is to assume the fault lies with your prompting, not with the AI model. This mindset shift is crucial for improvement.
Why This Assumption Matters:
What to Check: When output is weak, ask yourself:
You can also ask the AI to explain its logic and reasoning chain to learn how it thinks. This iteration process teaches you how to understand the model and teaches the model how to understand you.
To improve your prompting through iteration, three specific patterns or "cheat codes" are provided. These patterns are proven techniques for refining AI output.
Chain of Thought is a prompting pattern that asks the model to show its reasoning process before delivering a final answer.
Instruction: "Think step by step, show your reasoning, then give me the final concise answer."
Purpose:
The Verifier Pattern instructs the AI to seek clarification before attempting to answer, ensuring it fully understands what you need.
Instruction: "Ask me clarifying questions one at a time before attempting the answer again."
Purpose:
The Refinement Pattern asks the AI to improve your prompt itself, offering better versions for you to choose from.
Instruction: "Propose two sharper versions of the question and ask me which one I prefer."
Purpose:
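The three patterns can be captured as reusable suffixes appended to any base question. The instruction wording below follows the prompts quoted in this section; the helper function itself is a hypothetical convenience:

```python
# Reusable iteration patterns ("cheat codes") appended to a base question.
PATTERNS = {
    "chain_of_thought": ("Think step by step, show your reasoning, "
                         "then give me the final concise answer."),
    "verifier": ("Ask me clarifying questions one at a time "
                 "before attempting the answer again."),
    "refinement": ("Propose two sharper versions of the question "
                   "and ask me which one I prefer."),
}

def apply_pattern(question: str, pattern: str) -> str:
    """Append one of the three iteration patterns to a question."""
    return f"{question}\n\n{PATTERNS[pattern]}"

print(apply_pattern("Why is my landing page converting poorly?", "chain_of_thought"))
```

Keeping the patterns in one place makes it easy to cycle through them during iteration instead of retyping the instructions each time.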
AI often produces generic answers when prompted vaguely. To avoid mediocrity, you must direct the model toward the "sharper edges of its brain" by navigating it toward experts, frameworks, and depth.
When you ask the AI a vague question without guidance, it tends to produce "generic, mid, superficial" outputs. This happens because the AI defaults to the most common, widely-applicable information in its training data—what might be described as the "center" of the embedding space.
The Problem:
The Solution: You must actively steer the AI away from generic responses and toward specific expertise.
To access deeper, more sophisticated outputs, you need to guide the AI toward specific areas of expertise and mastery within its knowledge base.
The most effective way to steer toward mastery is to explicitly reference specific experts, established frameworks, or relevant research in your prompt.
Example: Instead of asking a generic question like "How can I improve team creativity?", you could ask:
"Explain using ideas from Pixar's brain trust, Satya Nadella's strategy, and Harvard's research on psychological safety."
Why This Works:
If you're not sure which experts, frameworks, or research to reference, you can ask the AI to suggest them first.
Two-Step Process:
Benefits:
Because generative AI is designed to generate things, it can be confidently wrong. You must verify the output, critiquing rather than consuming the information.
The fundamental nature of generative AI means it creates plausible-sounding text based on patterns, not fact-checking. This makes verification essential.
Why Verification is Critical:
Mindset Shift: By the end of the third week, you should feel more in control of the output. You are not accepting the AI's word as final; you are verifying, questioning, and refining.
Five specific verification methods are provided to help you rigorously check AI output:
This method involves asking the AI to explicitly state all the assumptions it made in generating its response, and to rank its confidence in each one.
Instruction: "List and rank every assumption you made."
Purpose:
This method requires the AI to provide specific citations for its claims, including verifiable details.
Instruction: "Cite two independent sources (including title, URL, and a quote) for each major claim."
Purpose:
This method pushes the AI to present opposing viewpoints or contradictory evidence, ensuring a balanced perspective.
Instruction: "Find a credible source that disagrees with your answer and explain the discrepancies."
Purpose:
This method requires the AI to show its work, particularly for any numerical claims or calculations.
Instruction: "Recompute every figure and show your math or code."
Purpose:
This method involves using multiple AI models to verify each other's claims, creating a system of checks and balances.
Instruction: "Run the prompt in different models (ChatGPT, Gemini, Claude) and then ask one model to critique or verify the claims made by another."
Purpose:
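A cross-model check can be orchestrated with a small loop. Everything here is an illustrative sketch: `ask` is a placeholder for whatever client call actually reaches each model (each vendor's SDK differs), and the model names are only labels.

```python
def ask(model: str, prompt: str) -> str:
    """Placeholder for a real API call to the named model."""
    return f"[{model}'s answer to: {prompt[:40]}...]"

def cross_verify(prompt: str, models: list) -> dict:
    """Run the same prompt in several models, then have each model critique
    the answer produced by the next model in the list."""
    answers = {m: ask(m, prompt) for m in models}
    critiques = {}
    for i, m in enumerate(models):
        peer = models[(i + 1) % len(models)]  # each model reviews its neighbour
        critiques[m] = ask(
            m, f"Critique this answer for factual errors:\n{answers[peer]}"
        )
    return critiques

report = cross_verify("What caused the 2008 financial crisis?",
                      ["ChatGPT", "Gemini", "Claude"])
for model, critique in report.items():
    print(model, "->", critique)
```

The round-robin pairing means every answer gets at least one independent reviewer, which is the checks-and-balances idea behind this method.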
The final step focuses on making AI output sound like you, rather than sounding generic (like "junk food output"). You should treat the AI like a sparring partner—pushing back and sharpening both your and the AI's thinking.
The OCEAN Framework turns generic answers into tasteful insights. This framework is what elevates you to the absolute expert level of AI interaction.
Original is the first component of the OCEAN Framework, which is the methodology used for executing Step 7: Developing Tastes (Week 4). This step is focused on transforming generic AI output into personalized, insightful content by demanding novelty and multiple perspectives.
The Purpose of O: Original
The overarching goal of Step 7 is to move past the production of routine or "junk food output" and generate results that sound distinctly like you, reflecting your unique "tastes". The Original component specifically addresses the AI's tendency to produce conventional, predictable ideas.
By pushing for originality, you force the AI to explore less-traveled areas of its knowledge base and generate responses that stand out from typical answers. This is a key element of treating the AI as a "sparring partner"—arguing with it and pushing back to sharpen both your and the AI's thinking.
The Specific Instruction
To ensure the output is original, you must use a prompt that explicitly demands nonobvious thinking:
This instruction serves multiple functions:
Context within the OCEAN Framework
Original is the first step in a sequence of refinements:
By demanding originality from the start, you establish a foundation that prevents the rest of the OCEAN process from polishing generic ideas. This practice is part of the overarching idea that "you're not just training the model, you are training you".
Concrete is the second component of the OCEAN Framework, which is the systematic approach for implementing Step 7: Developing Tastes (Week 4). This component is essential for grounding the AI's original ideas in verifiable, tangible reality.
The Purpose of C: Concrete
The overall goal of Step 7 is to generate output that moves beyond "junk food" responses and creates material that sounds distinctly like you. The Concrete component specifically combats the AI's tendency to speak in abstractions and generalities.
By requiring names, examples, and numbers, you ensure that every claim the AI makes is backed by specific, verifiable details. This transforms vague assertions into substantive arguments that can be evaluated, verified, and trusted.
The Specific Instruction
To make the output concrete, you must use a prompt that demands specificity:
This instruction ensures that:
Context within the OCEAN Framework
C: Concrete builds directly on the foundation of originality and prepares for evidence and assertion:
By demanding concrete details, you act as a "sparring partner", actively pushing back against the AI's generic tendencies and ensuring that the final output is not only unique but also empirically solid. This process is part of the broader self-improvement mandate of Week 4: "you're not just training the model, you are training you".
Evident is the third component of the OCEAN Framework, which is the methodology used for executing Step 7: Developing Tastes (Week 4). This step is focused on transforming generic AI output into personalized, insightful content by demanding transparency and logical support.
The Purpose of E: Evident
The overarching goal of Step 7 is to move past the production of routine or "junk food output" and generate results that sound distinctly like you, reflecting your unique "tastes". The Evident component ensures that the sophisticated ideas generated through the OCEAN process are logically sound and transparently supported.
By requiring the AI to make its reasoning visible and confirm it has "enough evidence," you push the AI to move beyond a summary and explicitly demonstrate the path it took to reach its conclusion. This is a crucial element of treating the AI as a "sparring partner"—arguing with it and pushing back to sharpen both your and the AI's thinking.
The Specific Instruction
To ensure the output is evident, you provide a specific, multi-part prompt that demands transparency:
This instruction serves two critical functions:
Context within the OCEAN Framework
E: Evident is situated within the OCEAN sequence to ensure that the novel claims are thoroughly substantiated:
By consistently demanding visible reasoning and evidence, you ensure that the output is not just smart, but verifiable and critically robust. This active engagement during Week 4 emphasizes that "you're not just training the model, you are training you".
Assertive is the fourth component of the OCEAN Framework, which is the foundational methodology for implementing Step 7: Developing Tastes (Week 4). This component is crucial for moving AI output from generic summaries to critical, personal, and insightful material.
The Purpose of A: Assertive
The primary goal of Step 7 is to generate AI output that sounds distinctly "like you". By the time you reach this stage, you are transitioning from generating competent, verified output to injecting personal "tastes".
The Assertive component specifically addresses the AI's tendency to provide neutral, non-committal answers. By demanding a defensible stance, you ensure the output:
This process is a key element of treating the AI as a "sparring partner"—arguing with it and pushing back to sharpen the thinking of both the model and yourself.
The Specific Instruction
To make the AI output Assertive, you must use a prompt that compels the model to commit to a thesis and defend it:
This rigorous instruction ensures that the final output possesses critical depth by requiring the AI to:
Context within the OCEAN Framework
As the fourth letter in OCEAN, the Assertive step builds directly upon the groundwork established by the preceding elements:
By demanding an assertive stance, you ensure the content moves beyond mere information synthesis into the realm of critical insight, which is necessary to "turn generic answers into tasteful insights". This practice is part of the overarching idea that through this rigorous process, "you're not just training the model, you are training you".
Narrative is the fifth and final component of the OCEAN Framework, which is the systematic approach for implementing Step 7: Developing Tastes (Week 4). This step ensures that the rich, insightful content developed during Week 4 is presented effectively and cohesively.
The Purpose of N: Narrative
Step 7 is dedicated to generating output that moves beyond the "same junk food output everyone else gets" and instead creates material that sounds specifically "like you". After you have rigorously guided the AI to produce original, concrete, evident, and assertive claims, the Narrative component focuses on refining the presentation.
The goal is to ensure the final output "flows" well and is "tight". By demanding a narrative structure, you turn the compilation of facts and arguments into a compelling, digestible story. This is a vital part of injecting your "tastes" into the output, ensuring the content is not just accurate and insightful, but also beautifully composed.
The Specific Instruction
To implement the Narrative component, you take on the role of a director or editor, guiding the AI's structure:
This instruction gives you control over the storytelling elements of the output, such as the hook, problem, insight, proof, and actions. This ensures the final output is cohesive and maintains a high-quality flow.
Context within the OCEAN Framework
Narrative serves as the culminating step in the OCEAN Framework, bringing together all the critical elements developed previously:
By using the OCEAN framework and treating the AI as a "sparring partner" throughout Week 4, you transform "generic answers into tasteful insights". The final Narrative step ensures that your mastery—developed through every prompt, revision, and judgment—results in output that is not only factually superior but also structurally masterful. This rigorous process means that "you're not just training the model, you are training you".
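Taken together, the five OCEAN demands can be chained as successive refinement passes over a draft. The instruction strings below paraphrase this section's prompts, and `ask` is again a placeholder for a real model call:

```python
# Successive refinement passes, one per OCEAN letter.
OCEAN_STEPS = [
    ("Original",  "Give me three nonobvious angles; avoid the conventional take."),
    ("Concrete",  "Back every claim with specific names, examples, and numbers."),
    ("Evident",   "Make your reasoning visible and confirm you have enough evidence."),
    ("Assertive", "Commit to a clear thesis and defend it against the strongest objection."),
    ("Narrative", "Restructure as a tight story: hook, problem, insight, proof, actions."),
]

def ask(prompt: str) -> str:
    """Placeholder for a real model call."""
    return f"[model output for: {prompt[:30]}...]"

def ocean_refine(draft: str) -> str:
    """Run a draft through all five OCEAN refinement passes in order."""
    current = draft
    for name, instruction in OCEAN_STEPS:
        current = ask(f"{instruction}\n\nCurrent draft:\n{current}")
    return current

print(ocean_refine("AI will change knowledge work."))
```

The ordering matters: originality first, so the later passes sharpen a distinctive idea rather than polishing a generic one.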
The provided transcript from a YouTube video outlines a comprehensive, seven-step, 30-day roadmap for users seeking to master artificial intelligence prompting and usage. The guide introduces structured prompting frameworks, including AIM (Actor, Input, Mission) to sharpen intent and MAP for providing essential context (Memory, Assets, Actions). Further steps focus on iterative prompt refinement, moving beyond generic answers by steering the model toward specific expert sources. The final phase involves developing a unique voice and "taste" for AI results by applying the OCEAN framework (Original, Concrete, Evident, Assertive, Narrative).